
    The complexity of normal form rewrite sequences for Associativity

    The complexity of a particular term-rewrite system is considered: the rule of associativity (x*y)*z --> x*(y*z). Algorithms and exact calculations are given for the longest and shortest sequences of applications of --> that result in normal form (NF). The shortest NF sequence for a term x always has length n - drm(x), where n is the number of occurrences of * in x and drm(x) is the depth of the rightmost leaf of x. The longest NF sequence for any term is of length n(n-1)/2. Comment: 5 pages
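    Both quantities are easy to check mechanically. Below is a minimal sketch (my own illustration, not the paper's code) that represents terms as nested tuples with string leaves, computes n and drm(x), and reaches normal form by rewriting only along the right spine, which takes exactly n - drm(x) steps; all function names are invented for this example.

```python
# Minimal sketch of the associativity rewrite (x*y)*z -> x*(y*z).
# Terms are nested 2-tuples; leaves are strings.

def size(t):
    """n = number of '*' occurrences (internal nodes)."""
    return 0 if isinstance(t, str) else 1 + size(t[0]) + size(t[1])

def drm(t):
    """Depth of the rightmost leaf."""
    return 0 if isinstance(t, str) else 1 + drm(t[1])

def normalize_shortest(t, steps=0):
    """Rewrite along the right spine only; reaches the right-comb
    normal form in size(t) - drm(t) steps, matching the paper's bound."""
    if isinstance(t, str):
        return t, steps
    if isinstance(t[0], tuple):                 # redex at the root
        (x, y), z = t
        return normalize_shortest((x, (y, z)), steps + 1)
    right, steps = normalize_shortest(t[1], steps)
    return (t[0], right), steps

left_comb = ((('a', 'b'), 'c'), 'd')            # ((a*b)*c)*d, n = 3, drm = 1
nf, n_steps = normalize_shortest(left_comb)
print(nf, n_steps)                              # ('a', ('b', ('c', 'd'))) 2
assert n_steps == size(left_comb) - drm(left_comb)
```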

    A Computational Model of Syntactic Processing: Ambiguity Resolution from Interpretation

    Syntactic ambiguity abounds in natural language, yet humans have no difficulty coping with it. In fact, the process of ambiguity resolution is almost always unconscious. It is not infallible, however, as example 1 demonstrates. 1. The horse raced past the barn fell. This sentence is perfectly grammatical, as is evident when it appears in the following context: 2. Two horses were being shown off to a prospective buyer. One was raced past a meadow, and the other was raced past a barn. ... Grammatical yet unprocessable sentences such as 1 are called `garden-path sentences.' Their existence provides an opportunity to investigate the human sentence processing mechanism by studying how and when it fails. The aim of this thesis is to construct a computational model of language understanding which can predict processing difficulty. The data to be modeled are known examples of garden-path and non-garden-path sentences, and other results from psycholinguistics. It is widely believed that there are two distinct loci of computation in sentence processing: syntactic parsing and semantic interpretation. One longstanding controversy is which of these two modules bears responsibility for the immediate resolution of ambiguity. My claim is that it is the latter, and that the syntactic processing module is a very simple device which blindly and faithfully constructs all possible analyses for the sentence up to the current point of processing. The interpretive module serves as a filter, occasionally discarding certain of these analyses which it deems less appropriate for the ongoing discourse than their competitors. This document is divided into three parts. The first is introductory, and reviews a selection of proposals from the sentence processing literature. The second part explores a body of data which has been adduced in support of a theory of structural preferences --- one that is inconsistent with the present claim. I show how the current proposal can be specified to account for the available data, and moreover to predict where structural preference theories will go wrong. The third part is a theoretical investigation of how well the proposed architecture can be realized using current conceptions of linguistic competence. In it, I present a parsing algorithm and a meaning-based ambiguity resolution method. Comment: 128 pages, LaTeX source compressed and uuencoded, figures separate; macros: rotate.sty, lingmacros.sty, psfig.tex. Dissertation, Computer and Information Science Dept., October 199
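    The claimed division of labor can be caricatured in a few lines. The toy sketch below is my own illustration, not the thesis implementation; the plausibility scores are an invented stand-in for the interpretive module's judgments. The parser returns every analysis of the prefix, the interpreter prunes the contextually implausible ones, and a garden path is predicted whenever the pruned analysis is the one the rest of the sentence turns out to need.

```python
# Toy sketch of the parser-plus-interpretive-filter architecture.

def parser_analyses(prefix):
    # Hypothetical analyses and plausibility scores that an interpreter
    # might assign, in a null context, to the ambiguous prefix
    # "the horse raced past the barn".
    return {
        "raced as main verb": 0.9,          # plausible without context
        "raced as reduced relative": 0.2,   # needs supporting discourse
    }

def interpretive_filter(analyses, keep_ratio=0.5):
    """Discard analyses deemed much less appropriate than competitors."""
    best = max(analyses.values())
    return {a: p for a, p in analyses.items() if p >= keep_ratio * best}

surviving = interpretive_filter(parser_analyses("the horse raced past the barn"))
print(surviving)
# Only the main-verb reading survives, so the final word "fell" of
# example 1 has nothing to attach to: a garden path is predicted.
assert "raced as reduced relative" not in surviving
```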

    Dopamine, uncertainty and TD learning

    Substantial evidence suggests that the phasic activity of dopaminergic neurons in the primate midbrain represents a temporal difference (TD) error in predictions of future reward, with increases above and decreases below baseline consequent on positive and negative prediction errors, respectively. However, dopamine cells have very low baseline activity, which implies that the representation of these two sorts of error is asymmetric. We explore the implications of this seemingly innocuous asymmetry for the interpretation of dopaminergic firing patterns in experiments with probabilistic rewards, which bring about persistent prediction errors. In particular, we show that when the non-stationary prediction errors are averaged across trials, a ramp in the activity of the dopamine neurons should be apparent, whose magnitude depends on the learning rate. Exactly this phenomenon was observed in a recent experiment, though it was interpreted there in antipodal terms, as a within-trial encoding of uncertainty.
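    The mechanism can be reproduced in a short simulation. The sketch below is my own rendering of the abstract's argument, with assumed parameter values: a TD learner faces a reward delivered with probability 0.5, negative prediction errors are compressed to mimic the low firing baseline, and the across-trial average of the recorded errors ramps up toward the time of reward, with a magnitude set by the learning rate alpha.

```python
# TD learning with a probabilistic terminal reward; averaging the
# asymmetrically represented prediction errors across trials leaves a ramp.
import numpy as np

rng = np.random.default_rng(0)
T, trials, alpha, p_reward = 10, 5000, 0.1, 0.5
asym = 1.0 / 6.0                     # assumed compression of negative errors
V = np.zeros(T + 1)                  # value per time step; V[T] stays 0
avg_signal = np.zeros(T)

for _ in range(trials):
    r = np.zeros(T)
    r[T - 1] = float(rng.random() < p_reward)   # reward on half the trials
    for t in range(T):
        delta = r[t] + V[t + 1] - V[t]          # TD error, no discounting
        V[t] += alpha * delta
        # low baseline: dips below zero are represented compressed
        avg_signal[t] += delta if delta > 0 else asym * delta

print(np.round(avg_signal / trials, 4))   # ramps up toward the reward time
```

    Rerunning with a larger or smaller alpha changes the size of the ramp, which is the dependence on learning rate the abstract highlights.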

    Informing Science & IT Education Conference (InSITE)

    PRACTIS (Privacy Appraising Challenges to Technologies and Ethics) is a research project initiated by the EU. It was carried out over three and a half years by research institutes from six countries: Israel (project coordinator), Poland, Germany, Finland, Belgium, and Austria. PRACTIS concluded in April 2013 with the submission of a list of recommendations to the EU. PRACTIS focused on three major research tracks: technological forecasting, ethical and legal aspects of privacy, and the changing perception of privacy among younger generations (Internet "natives"). This paper consists of two parts. The first part describes one of the most interesting studies carried out within PRACTIS – a survey of high-school students about their perception of privacy. The second part outlines some policy recommendations, mostly for governments and regulators. The major conclusion of the high-school survey is that teenagers do indeed perceive privacy differently. For them, the individual sphere in which they wish to protect their privacy is not limited to their immediate physical environment (home, diary, body) but extends to their virtual environment, such as social networking sites (SNS). They are also willing to trade privacy for benefits provided by the digital environment. The major recommendation conveyed to the EU is that there is no single "deus ex machina" solution to the threats privacy faces from emerging technologies such as ICT, genetics, nanotechnology, cognitive and brain sciences, and the like. What is needed is a comprehensive strategy and policy and a basket of solutions spanning technology, law and regulation, organizational issues, education, and social issues. A detailed list of recommendations is given in the article.

    Re-growth of stellar disks in mature galaxies: The two component nature of NGC 7217 revisited with VIRUS-W

    Previous studies have reported the existence of two counter-rotating stellar disks in the early-type spiral galaxy NGC 7217. We have obtained high-resolution optical spectroscopic data (R ~ 9000) with the new fiber-based Integral Field Unit instrument VIRUS-W at the 2.7 m telescope of the McDonald Observatory in Texas. Our analysis confirms the existence of two components; however, we find them to be co-rotating. The first component is the more luminous (~ 77% of the total light), has the higher velocity dispersion (~ 170 km/s), and rotates relatively slowly (projected v_max = 50 km/s). The lower-luminosity second component (~ 23% of the total light) has a low velocity dispersion (~ 20 km/s) and rotates quickly (projected v_max = 150 km/s). The difference in the kinematics of the two stellar components allows us to perform a kinematic decomposition and to measure the strengths of their Mg and Fe Lick indices separately. The rotational velocities and dispersions of the less luminous, faster component are very similar to those of the interstellar gas as measured from the [OIII] emission. Morphological evidence of active star formation in this component further suggests that NGC 7217 may be in the process of (re)growing a disk inside a more massive and higher-dispersion stellar halo. The kinematically cold and regular structure of the gas disk, in combination with the almost dust-free central morphology, allows us to compare the dynamical mass inside the central 500 pc with predictions from a stellar population analysis. We find agreement between the two if a Kroupa stellar initial mass function is assumed. Comment: accepted for publication by MNRAS
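    Schematically, such a decomposition amounts to fitting a two-component line-of-sight velocity distribution. The sketch below is my illustration, not the authors' pipeline: it builds a mock LOSVD from the light fractions, velocities, and dispersions quoted in the abstract, then recovers both components with an ordinary least-squares fit.

```python
# Two-Gaussian kinematic decomposition of a mock LOSVD (schematic only).
import numpy as np
from scipy.optimize import curve_fit

def gauss(v, mu, sig):
    return np.exp(-0.5 * ((v - mu) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))

def two_component_losvd(v, f1, v1, s1, v2, s2):
    # f1 = light fraction of component 1; component 2 carries 1 - f1
    return f1 * gauss(v, v1, s1) + (1.0 - f1) * gauss(v, v2, s2)

v = np.linspace(-600.0, 600.0, 400)                # km/s grid
truth = (0.77, 50.0, 170.0, 150.0, 20.0)           # values from the abstract
rng = np.random.default_rng(1)
mock = two_component_losvd(v, *truth) + rng.normal(0.0, 1e-5, v.size)

fit, _ = curve_fit(two_component_losvd, v, mock,
                   p0=(0.5, 0.0, 100.0, 120.0, 40.0))
print(np.round(fit, 1))        # recovers ~(0.77, 50, 170, 150, 20)
```

    The real measurement is of course performed on spectra rather than on a clean LOSVD, but the separation works for the same reason it does here: the two components occupy distinct regions of velocity space.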

    TraumAID: AI Support in the Management of Multiple Trauma

    This paper outlines the particular demands that multiple trauma makes on systems designed to provide appropriate decision support, and the ways that these demands are currently being met in our system, TraumAID. The demands follow from: (1) the nature of trauma and the procedures used in its diagnosis, (2) the need to adjust diagnostic and therapeutic procedures to available resource levels, (3) the role of anatomy in trauma and the need for anatomical reasoning, (4) the role of non-specialists in managing trauma, and (5) the competing demands of multiple injuries and the consequent need for planning. We believe that these demands are not unique to multiple trauma, so the paper may be of general interest to expert-system research and development.

    TraumAID: Reasoning and Planning in the Initial Definitive Management of Multiple Injuries

    The TraumAID system has been designed to provide computerized decision support to optimize the initial definitive management of acutely injured patients after resuscitation and stabilization. The currently deployed system, TraumAID 1.0, addresses penetrating injuries to the abdomen and to the chest. Our experience with TraumAID 1.0 has demonstrated some major deficiencies in rule-based reasoners that are faced with problems of both diagnosis and treatment. To address these deficiencies, we have redesigned the system (TraumAID 2.0), factoring it into two modules: (1) a rule-based reasoner embodying the knowledge and logical machinery needed to link clinical evidence to diagnostic and therapeutic goals, and (2) a planner embodying the global knowledge and logical machinery needed to create a plan that addresses combinations of goals. After describing TraumAID 2.0, we discuss an extension of the TraumAID interface (critique-mode interaction) that may improve its acceptability in a clinical setting. We close with a brief discussion of management support in resource-limited environments, an important issue in the time-critical context of multiple trauma.
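    The motivation for the two-module factoring can be illustrated with a toy example. The sketch below is my own; the findings, goals, actions, and priorities are invented placeholders, not TraumAID's knowledge base. The point it shows is structural: the reasoner links each piece of evidence to goals locally, while the planner sees all goals at once and can order actions globally across competing injuries.

```python
# Toy sketch of a rule-based reasoner feeding a global planner.

RULES = {   # hypothetical evidence -> goal rules
    "gunshot wound, left chest": ["rule out hemothorax"],
    "hypotension": ["restore circulating volume"],
}

ACTIONS = {  # hypothetical goal -> (action, priority) knowledge
    "rule out hemothorax": ("chest x-ray", 2),
    "restore circulating volume": ("start IV fluids", 1),
}

def reason(evidence):
    """Link clinical evidence to diagnostic and therapeutic goals."""
    goals = []
    for finding in evidence:
        goals.extend(RULES.get(finding, []))
    return goals

def plan(goals):
    """Order actions globally, across all goals at once, rather than
    letting each rule commit to an action in isolation."""
    return [a for a, _ in sorted((ACTIONS[g] for g in set(goals)),
                                 key=lambda pair: pair[1])]

print(plan(reason(["gunshot wound, left chest", "hypotension"])))
# ['start IV fluids', 'chest x-ray']
```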

    Are Milky-Way-like galaxies like the Milky Way? A view from SDSS-IV/MaNGA

    In this paper, we place the Milky Way (MW) in the context of similar-looking galaxies in terms of their star-formation and chemical evolution histories. We select a sample of 138 Milky-Way analogues (MWAs) from the SDSS-IV/MaNGA survey based on their masses, Hubble types, and bulge-to-total ratios. To compare their chemical properties to the detailed spatially-resolved information available for the MW, we use a semi-analytic spectral fitting approach, which fits a self-consistent chemical-evolution and star-formation model directly to the MaNGA spectra. We model the galaxies' inner and outer regions assuming that some of the material lost in stellar winds falls inwards. We also incorporate chemical enrichment from Type II and Type Ia supernovae to follow the alpha-element abundance at different metallicities and locations. We find some MWAs where the stellar properties closely reproduce the distribution of age, metallicity, and alpha enhancement at both small and large radii in the MW. In these systems, the match is driven by the longer timescale for star formation in the outer parts, and the inflow of enriched material to the central parts. However, other MWAs have very different histories. These divide into two categories: self-similar galaxies, where the inner and outer parts evolve identically; and centrally-quenched galaxies, where there is very little evidence of late-time central star formation driven by material accreted from the outer regions. We find that, although selected to be comparable, there are subtle morphological differences between galaxies in these different classes, and that the centrally-quenched galaxies formed their stars systematically earlier. Comment: 16 pages, 9 figures, MNRAS accepted version
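    The enrichment logic the model relies on can be sketched in a one-zone toy calculation. The version below is my own simplification, with invented yields and timescales, not the paper's semi-analytic model: prompt Type II supernovae supply both alpha elements and iron in step with star formation, while Type Ia iron arrives after a delay, so the alpha-to-iron ratio starts high and declines as metallicity grows, and it declines later where star formation is more extended, as in the outer disc.

```python
# One-zone toy of prompt SN II vs. delayed SN Ia enrichment.
import numpy as np

dt = 0.01                                     # time step in Gyr
t = np.arange(0.0, 10.0, dt)
sfr = np.exp(-t / 3.0)                        # assumed star-formation history

# Invented yields (arbitrary units) and an exponential SN Ia
# delay-time distribution with an assumed 1.5 Gyr timescale.
y_alpha_II, y_fe_II, y_fe_Ia, tau_ia = 2.0, 1.0, 1.5, 1.5
dtd = np.exp(-t / tau_ia) / tau_ia
ia_rate = np.convolve(sfr, dtd)[: t.size] * dt   # delayed SN Ia rate

alpha = np.cumsum(y_alpha_II * sfr) * dt                  # prompt SN II only
fe = np.cumsum(y_fe_II * sfr + y_fe_Ia * ia_rate) * dt    # prompt + delayed

alpha_fe = np.log10(alpha / fe)               # [alpha/Fe]-like proxy
print(f"alpha/Fe proxy falls by {alpha_fe[0] - alpha_fe[-1]:.2f} dex "
      f"as delayed Ia iron accumulates")
```

    Making sfr decline more slowly (the outer-region case) delays the downturn, which is the radial signature the spectral fits exploit.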